We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique, or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement, and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods, and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
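A minimal sketch of what such an entropy-minimization adaptation loop can look like, in the spirit of self-learning methods of this kind (e.g., TENT-style adaptation of batch-norm parameters). The model choice, optimizer, learning rate, and the hypothetical `test_loader` are illustrative assumptions, not the exact setup reported in the abstract.

```python
# Illustrative sketch: entropy-minimization self-learning on unlabeled, shifted test data.
# Assumptions: a pretrained classifier and an unlabeled loader of shifted images exist.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")
model.train()  # let batch-norm statistics adapt to the shifted data

# Adapt only the batch-norm affine parameters; keep all other weights frozen.
params = []
for module in model.modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        params += list(module.parameters())
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)

def entropy_minimization_step(batch: torch.Tensor) -> float:
    """One adaptation step: minimize the prediction entropy of an unlabeled batch."""
    logits = model(batch)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

# Hypothetical usage: a few epochs over the unlabeled test stream are typically enough.
# for epoch in range(2):
#     for images, _ in test_loader:
#         entropy_minimization_step(images)
```

Pseudo-labeling follows the same loop, except the loss is cross-entropy against the model's own (possibly thresholded) hard predictions instead of the prediction entropy.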
For sequential data, change points are the moments of an abrupt regime switch. Such changes appear in different scenarios, including complex video surveillance, and we need to detect them as quickly as possible. Classic approaches to change point detection (CPD) fall short for semi-structured sequential data because they lack adequate representation-learning procedures. We propose a principled loss function that approximates the classic rigorous solution but is differentiable and enables representation learning. This loss function balances change detection delay and time to a false alarm, yielding a successful model for CPD. In experiments, we consider both simple series and more complex real-world image sequences and videos with change points. For the more complex problems, we show that we need representations tailored to the specifics of the CPD task. With this in mind, the proposed approach improves upon baseline CPD results for various data types. For explosion detection, our method achieves an F1 score of 0.54, compared to baseline scores of 0.46 and 0.30.
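A minimal illustrative sketch of one way a differentiable objective can trade off detection delay against time to a false alarm; this is an assumption about the general idea, not the paper's exact loss. The scoring function, the simple sum-based surrogates, and the weighting `alpha` are all hypothetical.

```python
# Illustrative sketch (not the paper's exact formulation): a differentiable CPD loss
# over per-timestep change probabilities produced by a representation network.
from typing import Optional
import torch

def cpd_loss(scores: torch.Tensor, change_point: Optional[int], alpha: float = 1.0) -> torch.Tensor:
    """scores: shape (T,), probability at each step t that a change has already occurred."""
    if change_point is not None:
        # Detection-delay surrogate: low scores after the true change are penalized.
        delay = (1.0 - scores[change_point:]).sum()
        # False-alarm surrogate: high scores before the change are penalized.
        false_alarm = scores[:change_point].sum()
    else:
        delay = scores.new_zeros(())
        false_alarm = scores.sum()  # any alarm on a change-free series is false
    return delay + alpha * false_alarm

# Hypothetical usage with stand-in scores (in practice these come from a trained network):
# scores = torch.sigmoid(torch.randn(100, requires_grad=True))
# loss = cpd_loss(scores, change_point=60)
# loss.backward()  # gradients would flow into the representation network
```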